

Search for: All records

Creators/Authors contains: "Ait Khayi, Nisrine"


  1. The transfer learning pretraining-finetuning paradigm has revolutionized the natural language processing field, yielding state-of-the-art results in several subfields such as text classification and question answering. However, little work has investigated pretrained language models for the open student answer assessment task. In this paper, we fine-tune pretrained T5, BERT, RoBERTa, DistilBERT, ALBERT, and XLNet models on the DT-Grade dataset, which contains freely generated (or open) student answers together with judgments of their correctness. The experimental results demonstrate the effectiveness of these models, based on the transfer learning pretraining-finetuning paradigm, for open student answer assessment. An improvement of 8%-15% in accuracy was obtained over previous methods. In particular, a T5-based method led to state-of-the-art results with an accuracy and F1 score of 0.88.
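The pretraining-finetuning split described above can be caricatured at toy scale: a frozen feature extractor standing in for the pretrained encoder, with only a small classification head trained on labeled answers. Everything below (the character-feature "encoder", the helper names, the data) is an illustrative assumption, not the authors' pipeline:

```python
import math

def pretrained_features(text, dim=8):
    """Stand-in for a frozen pretrained encoder (hypothetical): deterministic
    character-position features. A real system would use T5/BERT/... embeddings."""
    v = [0.0] * dim
    for i, ch in enumerate(text):
        v[(ord(ch) + i) % dim] += 1.0
    norm = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / norm for x in v]

def fine_tune_head(examples, dim=8, epochs=200, lr=0.5):
    """Train only a logistic classification head; the 'encoder' stays frozen,
    mirroring the pretraining-finetuning split at toy scale."""
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for text, label in examples:
            f = pretrained_features(text, dim)
            p = 1.0 / (1.0 + math.exp(-(sum(wi * fi for wi, fi in zip(w, f)) + b)))
            g = p - label  # gradient of the log-loss w.r.t. the logit
            w = [wi - lr * g * fi for wi, fi in zip(w, f)]
            b -= lr * g
    return w, b

def predict(text, w, b, dim=8):
    """Probability that an answer is labeled correct under the toy head."""
    f = pretrained_features(text, dim)
    return 1.0 / (1.0 + math.exp(-(sum(wi * fi for wi, fi in zip(w, f)) + b)))
```

In the papers above the entire pretrained encoder is typically updated during fine-tuning; freezing it here is only to keep the sketch small.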
  2. Graph Convolutional Networks have achieved impressive results in multiple NLP tasks such as text classification. However, this approach has not yet been explored for the student answer assessment task. In this work, we propose to use Graph Convolutional Networks to automatically assess freely generated student answers within the context of dialogue-based intelligent tutoring systems. We cast this task as a node classification task. First, we build a DT-Grade graph where each node represents the concatenation of a student answer and its corresponding reference answer, and the edges represent the relatedness between nodes. Second, the DT-Grade graph is fed to two layers of Graph Convolutional Networks. Finally, the output of the second layer is fed to a softmax layer. The empirical results show that our model achieves state-of-the-art results, obtaining an accuracy of 73%.
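The two-layer setup in the abstract above follows the standard GCN propagation rule, H' = ReLU(Â·H·W), where Â is the adjacency matrix with self-loops, symmetrically normalized. A minimal pure-Python sketch of one such layer (the toy dimensions are assumptions, not the paper's configuration):

```python
import math

def gcn_layer(adj, features, weights):
    """One Graph Convolutional Network layer: H' = ReLU(A_hat . H . W),
    where A_hat is the symmetrically normalized adjacency with self-loops."""
    n = len(adj)
    # Add self-loops: A_tilde = A + I
    a = [[adj[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    # Node degrees under A_tilde
    deg = [sum(row) for row in a]
    # Symmetric normalization: A_hat = D^{-1/2} A_tilde D^{-1/2}
    a_hat = [[a[i][j] / math.sqrt(deg[i] * deg[j]) for j in range(n)]
             for i in range(n)]
    # Propagate features over the graph: A_hat . H
    f_dim = len(features[0])
    agg = [[sum(a_hat[i][k] * features[k][f] for k in range(n))
            for f in range(f_dim)] for i in range(n)]
    # Linear transform then ReLU: ReLU((A_hat . H) . W)
    out_dim = len(weights[0])
    return [[max(0.0, sum(agg[i][f] * weights[f][o] for f in range(f_dim)))
             for o in range(out_dim)] for i in range(n)]
```

Stacking two such layers and applying a row-wise softmax to the final node representations yields the node-classification architecture the abstract describes.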
  3. Inspired by Vaswani's transformer, we propose in this paper an attention-based transformer neural network with a multi-head attention mechanism for the task of student answer assessment. Results show the competitiveness of our proposed model. A highest accuracy of 71.5% was achieved when using ELMo embeddings, 10 heads of attention, and 2 layers. This is very competitive and rivals the highest accuracy achieved by a previously proposed Bi-GRU-Capsnet deep network (72.5%) on the same dataset. The main advantage of using transformers over Bi-GRU-Capsnet is reducing the training time and giving more space for parallelization.
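The multi-head mechanism referenced above is built from scaled dot-product attention, Attention(Q, K, V) = softmax(QKᵀ/√d_k)·V, as defined in Vaswani's transformer. A self-contained sketch of that core operation (toy matrices, single head; not the paper's trained model):

```python
import math

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V.
    Q, K, V are lists of row vectors; queries and keys share dimension d_k."""
    d_k = len(K[0])
    # Similarity scores Q K^T, scaled to stabilize the softmax
    scores = [[sum(q[i] * k[i] for i in range(d_k)) / math.sqrt(d_k)
               for k in K] for q in Q]
    # Row-wise softmax (max-subtracted for numerical stability)
    weights = []
    for row in scores:
        m = max(row)
        exps = [math.exp(s - m) for s in row]
        z = sum(exps)
        weights.append([e / z for e in exps])
    # Each output row is an attention-weighted sum of the value vectors
    out = [[sum(w[j] * V[j][d] for j in range(len(V)))
            for d in range(len(V[0]))] for w in weights]
    return out, weights
```

A multi-head layer runs several such attentions in parallel over learned projections of Q, K, and V and concatenates the results; with 10 heads and 2 layers, that is the configuration the abstract reports as best.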
  4. This paper reports the findings of an empirical study on the effects and nature of self-explanation during source code comprehension learning activities in the context of learning the Java programming language. Our study shows that self-explanation helps learning and that there is a strong positive correlation between the volume of self-explanation students produce and how much they learn. Furthermore, the benefit of self-explanation as an instructional strategy does not vary with students' prior knowledge. We found that participants explain target code examples using a combination of language, code references, and mathematical expressions. This is not surprising given the nature of the target item, a computer program, but it indicates that automatically evaluating such self-explanations may require novel techniques compared to self-explanations of narrative or scientific texts.
  5. Motivated by the good results of capsule networks in text classification and other Natural Language Processing tasks, we present in this paper a Bi-GRU Capsule Networks model to automatically assess freely generated student answers within the context of dialogue-based intelligent tutoring systems. Our proposed model is composed of several important components: an embedding layer, a Bi-GRU layer, a capsule layer, and a SoftMax layer. We have conducted a number of experiments considering a binary classification task: correct or incorrect answers. Our model reached a highest accuracy of 72.50% when using ELMo word embeddings, as detailed in the body of the paper.
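The Bi-GRU layer in the model above reads the embedded answer in both directions using GRU cells. A scalar-valued sketch of the standard GRU update and the bidirectional pass (toy parameters chosen here for illustration, not the trained model):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_cell(x, h, p):
    """One GRU step (scalar input/state for brevity):
    z  = sigma(Wz*x + Uz*h)       update gate
    r  = sigma(Wr*x + Ur*h)       reset gate
    h~ = tanh(Wh*x + Uh*(r*h))    candidate state
    h' = (1 - z)*h + z*h~         new hidden state
    """
    z = sigmoid(p["Wz"] * x + p["Uz"] * h)
    r = sigmoid(p["Wr"] * x + p["Ur"] * h)
    h_cand = math.tanh(p["Wh"] * x + p["Uh"] * (r * h))
    return (1.0 - z) * h + z * h_cand

def bi_gru(seq, p):
    """Bidirectional pass: read the sequence left-to-right and right-to-left,
    returning both final hidden states (a Bi-GRU concatenates them)."""
    hf = hb = 0.0
    for x in seq:
        hf = gru_cell(x, hf, p)
    for x in reversed(seq):
        hb = gru_cell(x, hb, p)
    return hf, hb
```

In the full model, the concatenated bidirectional states feed the capsule layer, whose output is classified by the SoftMax layer.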
  6. Demo 